Stabilizing Policy Improvement for Large-Scale Infinite-Horizon Dynamic Programming

Authors

  • Michael J. O'Sullivan
  • Michael A. Saunders
Abstract

Today’s focus on sustainability within industry presents a modeling challenge that may be dealt with using dynamic programming over an infinite time horizon. However, the curse of dimensionality often results in a large number of states in these models. These large-scale models require numerically stable solution methods. The best method for infinite-horizon dynamic programming depends on both the optimality concept considered and the nature of transitions in the system. Previous research uses policy improvement to find strong-present-value optimal policies within normalized systems. A critical step in policy improvement is the calculation of coefficients for the Laurent expansion of the present value of a given policy. Policy improvement uses these coefficients to search for improvements of that policy. The system of linear equations that yields the coefficients will often be rank-deficient, so a specialized solution method for large singular systems is essential. We focus on implementing policy improvement for systems with substochastic classes (a subset of normalized systems). We present methods for calculating the present-value Laurent expansion coefficients of a policy with substochastic classes. Classifying the states allows for a decomposition of the linear system into a number of smaller linear systems. Each smaller linear system has full rank or is rank-deficient by one. We show how to make repeated use of a rank-revealing LU factorization to solve the smaller systems. In the rank-deficient case, excellent numerical properties are obtained with an extension of Veinott’s method [Ann. Math. Statist., 40 (1969), pp. 1635–1660] for substochastic systems.
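The abstract describes two computational ideas: evaluating a policy by solving a singular linear system for the leading coefficients of the Laurent expansion of its present value (v_rho = g/rho + h + O(rho) as the interest rate rho goes to 0), and using those coefficients to search for an improving policy. As a minimal illustration only, the sketch below works the simplest case, an irreducible stochastic (unichain) policy, in Python/NumPy: it computes the first two coefficients (gain and bias) and performs one Howard-style improvement step. The function names, the normalization h[0] = 0 (a crude stand-in for the paper's rank-revealing LU treatment of systems that are rank-deficient by one), and the tolerance are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def evaluate_policy(P, r):
    """First two Laurent coefficients (gain g, bias h) of a policy's
    present value, from the singular evaluation equations
        (I - P) g = 0,    g + (I - P) h = r.
    For an irreducible stochastic P the system is rank-deficient by
    one; pinning h[0] = 0 restores full rank (a crude stand-in for
    a rank-revealing LU factorization)."""
    n = len(r)
    A = np.eye(n) - P
    A[:, 0] = 1.0                # column 0 now carries the scalar gain
    x = np.linalg.solve(A, r)    # unknowns: (g, h[1], ..., h[n-1])
    g = np.full(n, x[0])         # constant gain in the unichain case
    h = x.copy()
    h[0] = 0.0
    return g, h

def improve_policy(Ps, rs, policy, tol=1e-9):
    """One policy-improvement step: in each state, switch to an action
    that lexicographically improves (P_a @ g, r_a + P_a @ h).
    Ps[a] is the transition matrix and rs[a] the reward vector of
    action a; `policy` maps each state to its current action."""
    n = len(policy)
    P = np.array([Ps[policy[s]][s] for s in range(n)])
    r = np.array([rs[policy[s]][s] for s in range(n)])
    g, h = evaluate_policy(P, r)
    new_policy = list(policy)
    for s in range(n):
        for a in range(len(Ps)):
            ga, ha = Ps[a][s] @ g, rs[a][s] + Ps[a][s] @ h
            gc = Ps[new_policy[s]][s] @ g
            hc = rs[new_policy[s]][s] + Ps[new_policy[s]][s] @ h
            # Lexicographic test on the first two Laurent terms.
            if ga > gc + tol or (abs(ga - gc) <= tol and ha > hc + tol):
                new_policy[s] = a
    return new_policy, g, h
```

Iterating improve_policy to a fixed point gives the classic policy-iteration loop; the paper's concern is keeping the evaluation step numerically stable when the systems are large, substochastic, and decomposed by state classification, which this toy normalization does not attempt.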


Similar Articles

Solving infinite horizon optimal control problems of nonlinear interconnected large-scale dynamic systems via a Haar wavelet collocation scheme

We consider an approximation scheme using Haar wavelets for solving a class of infinite-horizon optimal control problems (OCPs) of nonlinear interconnected large-scale dynamic systems. A computational method based on Haar wavelets in the time domain is proposed for solving the optimal control problem. The Haar wavelet integral operational matrix and a direct collocation method are utilized to find ...
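As a hedged aside on the mechanism this blurb names: the Haar operational matrix of integration P maps Haar coefficients of a function to Haar coefficients of its running integral, which is what lets collocation turn an OCP's differential constraints into algebraic ones. The sketch below builds P numerically from exact integrals of the Haar step functions rather than via the closed-form recursion used in the literature; the basis size m and the test function are arbitrary choices of mine.

```python
import numpy as np

def haar(i, t):
    """Value of the i-th Haar function on [0, 1) at time t."""
    if i == 0:
        return 1.0
    j = int(np.log2(i)); k = i - 2**j
    a, b, c = k / 2**j, (k + 0.5) / 2**j, (k + 1) / 2**j
    s = 2.0 ** (j / 2)
    return s if a <= t < b else (-s if b <= t < c else 0.0)

def haar_integral(i, t):
    """Exact integral of the i-th Haar function from 0 to t."""
    if i == 0:
        return t
    j = int(np.log2(i)); k = i - 2**j
    a, b, c = k / 2**j, (k + 0.5) / 2**j, (k + 1) / 2**j
    s = 2.0 ** (j / 2)
    if t <= a: return 0.0
    if t <= b: return s * (t - a)
    if t <= c: return s * ((b - a) - (t - b))
    return 0.0

m = 8                                   # basis size (a power of 2)
t = (np.arange(m) + 0.5) / m            # collocation points
H = np.array([[haar(i, tl) for tl in t] for i in range(m)])
J = np.array([[haar_integral(i, tl) for tl in t] for i in range(m)])
P = J @ np.linalg.inv(H)                # operational matrix of integration

# Sanity check: integrate f(t) = 3 t^2 via the operational matrix;
# the exact antiderivative is t^3.
c = np.linalg.solve(H.T, 3 * t**2)      # Haar coefficients of f
print(np.allclose(c @ P @ H, t**3, atol=0.05))  # True, up to resolution m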


Stabilizing Policy Improvement for Large-Scale Infinite-Horizon Dynamic Programming (Systems Optimization Laboratory, Department of Management Science and Engineering, Stanford University, Stanford, CA 94305-4026)



Linear Nonquadratic Optimal Control (IEEE Transactions on Automatic Control)

We consider the optimization of nonquadratic measures of the transient response. We present a computational implementation of dynamic programming recursions to solve finite-horizon problems. In the limit, the finite-horizon performance converges to the infinite-horizon performance. We provide conditions based on finite-horizon computations which only assure that a receding horizon implementatio...


Risk-averse dynamic programming for Markov decision processes

We introduce the concept of a Markov risk measure and we use it to formulate risk-averse control problems for two Markov decision models: a finite horizon model and a discounted infinite horizon model. For both models we derive risk-averse dynamic programming equations and a value iteration method. For the infinite horizon problem we also develop a risk-averse policy iteration method and we pro...
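As one concrete instance of the machinery this entry mentions (not necessarily the risk measure used in that paper), the sketch below runs value iteration for a cost-minimizing MDP with conditional value-at-risk (CVaR) as the one-step Markov risk mapping in place of the expectation. All names and the CVaR choice are my assumptions.

```python
import numpy as np

def cvar(values, probs, alpha):
    """CVaR_alpha of a discrete cost distribution: the mean of the
    worst (largest-cost) (1 - alpha) fraction of outcomes."""
    order = np.argsort(values)[::-1]          # worst costs first
    tail = 1.0 - alpha
    acc = out = 0.0
    for v, p in zip(values[order], probs[order]):
        take = min(p, tail - acc)
        out += take * v
        acc += take
        if acc >= tail - 1e-12:
            break
    return out / tail

def risk_averse_value_iteration(Ps, cs, gamma, alpha, iters=500):
    """Value iteration with a one-step CVaR risk mapping replacing the
    expectation:  v(s) = min_a  c(s,a) + gamma * CVaR_alpha(v(S')).
    Ps[a] is the transition matrix and cs[a] the cost vector of action a."""
    n_s = Ps[0].shape[0]
    v = np.zeros(n_s)
    for _ in range(iters):
        q = np.array([[cs[a][s] + gamma * cvar(v, Ps[a][s], alpha)
                       for a in range(len(Ps))] for s in range(n_s)])
        v = q.min(axis=1)
    return v, q.argmin(axis=1)
```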


Dynamic Policy Programming with Function Approximation

In this paper, we consider the problem of planning in infinite-horizon discounted-reward Markov decision problems. We propose a novel iterative method, called dynamic policy programming (DPP), which updates the parametrized policy by a Bellman-like iteration. For the discrete state-action case, we establish L∞-norm loss bounds for the performance of the policy induced by DPP and prove that it asy...
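The "Bellman-like iteration" this entry refers to can be sketched as a soft-max preference update; the recursion below follows the commonly cited form of the DPP update and should be treated as my reading rather than a verbatim transcription of that paper.

```python
import numpy as np

def softmax_avg(psi_s, eta):
    """Boltzmann-weighted average of the action preferences at one state."""
    w = np.exp(eta * (psi_s - psi_s.max()))
    w /= w.sum()
    return w @ psi_s

def dpp_iteration(Ps, rs, gamma, eta, iters=200):
    """Bellman-like preference update in the spirit of DPP (assumed form):
        psi(s,a) <- psi(s,a) - M(s) + r(s,a) + gamma * E[M(s') | s,a],
    where M = softmax_avg(psi). Ps[a] is the transition matrix and
    rs[a] the reward vector of action a."""
    n_a, n_s = len(Ps), Ps[0].shape[0]
    psi = np.zeros((n_s, n_a))
    for _ in range(iters):
        m = np.array([softmax_avg(psi[s], eta) for s in range(n_s)])
        psi = np.array([[psi[s, a] - m[s] + rs[a][s] + gamma * (Ps[a][s] @ m)
                         for a in range(n_a)] for s in range(n_s)])
    # The induced policy is the Boltzmann distribution over preferences.
    policy = np.exp(eta * (psi - psi.max(axis=1, keepdims=True)))
    return psi, policy / policy.sum(axis=1, keepdims=True)
```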



Journal:
  • SIAM Journal on Matrix Analysis and Applications

Volume 31, Issue -

Pages -

Publication date 2009